78 research outputs found

    Phase space sampling and operator confidence with generative adversarial networks

    We demonstrate that a generative adversarial network can be trained to produce Ising model configurations in distinct regions of phase space. In training a generative adversarial network, the discriminator neural network becomes very good at discerning examples from the training set and examples from the testing set. We demonstrate that this ability can be used as an anomaly detector, producing estimates of operator values along with a confidence in the prediction.
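
    A minimal sketch of the idea described above, assuming PyTorch; the class and function names (Discriminator, operator_with_confidence) are illustrative and not taken from the paper. The discriminator's sigmoid output is reused as a confidence score attached to an operator prediction.

    import torch
    import torch.nn as nn

    class Discriminator(nn.Module):
        """Scores how 'typical' a spin configuration looks relative to the training data."""
        def __init__(self, n_spins=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_spins, 128), nn.LeakyReLU(0.2),
                nn.Linear(128, 1), nn.Sigmoid())

        def forward(self, x):
            return self.net(x.flatten(1))

    def operator_with_confidence(config, regressor, discriminator):
        """Return an operator estimate together with the discriminator score,
        interpreted as confidence that the input lies in the sampled region of phase space."""
        with torch.no_grad():
            value = regressor(config)           # e.g. predicted energy or magnetization
            confidence = discriminator(config)  # near 1: in-distribution, near 0: anomalous
        return value, confidence

    # Toy usage: a random +/-1 configuration scored by an (untrained) discriminator,
    # with mean magnetization standing in for a learned operator regressor.
    disc = Discriminator()
    config = torch.randint(0, 2, (1, 64)).float() * 2 - 1
    value, conf = operator_with_confidence(config, lambda x: x.mean(dim=1), disc)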

    Deep neural networks for direct, featureless learning through observation: the case of 2d spin models

    We demonstrate the capability of a convolutional deep neural network in predicting the nearest-neighbor energy of the 4x4 Ising model. Using its success at this task, we motivate the study of the larger 8x8 Ising model, showing that the deep neural network can learn the nearest-neighbor Ising Hamiltonian after seeing only a vanishingly small fraction of configuration space. Additionally, we show that the neural network has learned both the energy and magnetization operators with sufficient accuracy to replicate the low-temperature Ising phase transition. We then demonstrate the ability of the neural network to learn other spin models, teaching the convolutional deep neural network to accurately predict the long-range interaction of a screened Coulomb Hamiltonian, a sinusoidally attenuated screened Coulomb Hamiltonian, and a modified Potts model Hamiltonian. In the case of the long-range interaction, we demonstrate the ability of the neural network to recover the phase transition with accuracy equivalent to that of the numerically exact method. Furthermore, in the case of the long-range interaction, the benefits of the neural network become apparent: it is able to make predictions with a high degree of accuracy, and to do so 1600 times faster than a CUDA-optimized exact calculation. Additionally, we demonstrate how the neural network succeeds at these tasks by examining the weights learned in a simplified demonstration.
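
    A minimal sketch of such an energy regressor, assuming PyTorch and an 8x8 periodic lattice with J = 1; the architecture and names (EnergyCNN, nn_ising_energy) are assumptions for illustration, not the network from the paper.

    import torch
    import torch.nn as nn

    def nn_ising_energy(spins):
        """Exact nearest-neighbour energy E = -sum_<ij> s_i s_j on a periodic lattice (J = 1)."""
        return -(spins * torch.roll(spins, 1, dims=-1)
                 + spins * torch.roll(spins, 1, dims=-2)).sum(dim=(-2, -1))

    class EnergyCNN(nn.Module):
        """Small convolutional regressor mapping a spin configuration to a scalar energy."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1, padding_mode='circular'), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1, padding_mode='circular'), nn.ReLU())
            self.head = nn.Linear(16, 1)

        def forward(self, x):
            h = self.features(x).mean(dim=(-2, -1))   # global average pooling
            return self.head(h).squeeze(-1)

    # Toy training step on random configurations; a real run would sample far more data.
    spins = torch.randint(0, 2, (32, 1, 8, 8)).float() * 2 - 1
    model = EnergyCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss = nn.functional.mse_loss(model(spins), nn_ising_energy(spins.squeeze(1)))
    loss.backward()
    opt.step()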

    Sampling algorithms for validation of supervised learning models for Ising-like systems

    In this paper, we build and explore supervised learning models of ferromagnetic system behavior, using Monte Carlo sampling of the spin configuration space generated by the 2D Ising model. Given the enormous size of the space of all possible Ising model realizations, the question arises as to how to choose a reasonable number of samples that will form physically meaningful and non-intersecting training and testing datasets. Here, we propose a sampling technique called ID-MH that uses the Metropolis-Hastings algorithm to create a Markov process across energy levels within a predefined configuration subspace. We show that applying this method retains phase transitions in both training and testing datasets and serves to validate a machine learning algorithm. For larger lattice dimensions, ID-MH is not feasible, as it requires knowledge of the complete configuration space. We therefore develop a new "block-ID" sampling strategy: it decomposes the given structure into square blocks with lattice dimension no greater than 5 and uses ID-MH sampling of candidate blocks. A further comparison of the performance of commonly used machine learning methods such as random forests, decision trees, k-nearest neighbors, and artificial neural networks shows that the PCA-based Decision Tree regressor is the most accurate predictor of magnetizations of the Ising model. For energies, however, the accuracy of prediction is not satisfactory, highlighting the need to consider more algorithmically complex methods (e.g., deep learning).
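
    For context, a minimal sketch of plain single-spin-flip Metropolis sampling of the 2D Ising model, written with numpy and J = 1 on a periodic lattice; the authors' ID-MH method additionally constructs a Markov process across energy levels of an enumerated configuration subspace, which is not reproduced here.

    import numpy as np

    def metropolis_sweep(spins, beta, rng):
        """One sweep: propose L*L single-spin flips, accept each with min(1, exp(-beta * dE))."""
        L = spins.shape[0]
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            neighbours = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * neighbours   # energy change from flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1
        return spins

    # Generate one decorrelated sample at inverse temperature beta = 0.5 on an 8x8 lattice.
    rng = np.random.default_rng(0)
    spins = rng.choice([-1, 1], size=(8, 8))
    for _ in range(200):   # burn-in / decorrelation sweeps
        metropolis_sweep(spins, beta=0.5, rng=rng)
    magnetization = spins.mean()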

    A note on the metallization of compressed liquid hydrogen

    We examine the molecular-atomic transition in liquid hydrogen as it relates to metallization. Pair potentials are obtained from first-principles molecular dynamics and compared with potentials derived from quadratic response. The results provide insight into the nature of covalent bonding under extreme conditions. Based on this analysis, we construct a schematic dissociation-metallization phase diagram and suggest experimental approaches that should significantly reduce the pressures necessary for the realization of the elusive metallic phase of hydrogen.

    Neuroevolutionary learning of particles and protocols for self-assembly

    Within simulations of molecules deposited on a surface, we show that neuroevolutionary learning can design particles and time-dependent protocols to promote self-assembly, without input from physical concepts such as thermal equilibrium or mechanical stability and without prior knowledge of candidate or competing structures. The learning algorithm is capable of both directed and exploratory design: it can assemble a material with a user-defined property, or search for novelty in the space of specified order parameters. In the latter mode it explores the space of what can be made, rather than the space of structures that are low in energy but not necessarily kinetically accessible.
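
    A minimal sketch of the neuroevolution idea, assuming numpy, a (1+1) evolutionary strategy, and a stand-in fitness function; in the actual work the fitness (a user-defined order parameter) is measured inside molecular simulations of deposition, which is not reproduced here, and all names are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)

    def policy(params, state):
        """Tiny one-hidden-layer network mapping a simulation state to protocol parameters."""
        W1, b1, W2, b2 = params
        h = np.tanh(state @ W1 + b1)
        return h @ W2 + b2

    def init_params(n_in=4, n_hidden=8, n_out=2):
        return [rng.normal(0, 0.1, (n_in, n_hidden)), np.zeros(n_hidden),
                rng.normal(0, 0.1, (n_hidden, n_out)), np.zeros(n_out)]

    def fitness(params):
        """Stand-in for the order parameter measured after a self-assembly simulation."""
        state = np.ones(4)
        return -np.sum(policy(params, state) ** 2)   # placeholder objective

    # (1+1) evolutionary strategy: mutate the network weights, keep the mutation only if it helps.
    best = init_params()
    best_f = fitness(best)
    for generation in range(100):
        trial = [p + rng.normal(0, 0.02, p.shape) for p in best]
        f = fitness(trial)
        if f > best_f:
            best, best_f = trial, f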

    Structure and phase boundaries of compressed liquid hydrogen

    We have mapped the molecular-atomic transition in liquid hydrogen using first-principles molecular dynamics. We predict that a molecular phase with short-range orientational order exists at pressures above 100 GPa. The presence of this ordering and the structure emerging near the dissociation transition provide an explanation for the sharpness of the molecular-atomic crossover and the concurrent pressure drop at high pressures. Our findings have non-trivial implications for simulations of hydrogen; previous equation of state data for the molecular liquid may require revision. Arguments for the possibility of a first-order liquid-liquid transition are discussed.

    Controlled Online Optimization Learning (COOL): Finding the ground state of spin Hamiltonians with reinforcement learning

    Reinforcement learning (RL) has become a proven method for optimizing a procedure for which success has been defined but the specific actions needed to achieve it have not. We apply the so-called "black box" method of RL to what has been referred to as the "black art" of simulated annealing (SA), demonstrating that an RL agent based on proximal policy optimization can, through experience alone, arrive at a temperature schedule that surpasses the performance of standard heuristic temperature schedules for two classes of Hamiltonians. When the system is initialized at a cool temperature, the RL agent learns to heat the system to "melt" it and then slowly cool it in an effort to anneal to the ground state; if the system is initialized at a high temperature, the algorithm immediately cools the system. We investigate the performance of our RL-driven SA agent in generalizing to all Hamiltonians of a specific class; when trained on random Hamiltonians of nearest-neighbour spin glasses, the RL agent is able to control the SA process for other Hamiltonians, reaching the ground state with a higher probability than a simple linear annealing schedule. Furthermore, the scaling performance (with respect to system size) of the RL approach is far more favourable, achieving a performance improvement of one order of magnitude on L=14x14 systems. We demonstrate the robustness of the RL approach when the system operates in a "destructive observation" mode, an allusion to a quantum system where measurements destroy the state of the system. The success of the RL agent could have far-reaching impact, from classical optimization, to quantum annealing, to the simulation of physical systems.
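
    A minimal sketch of the control loop described above, assuming numpy, a random nearest-neighbour spin-glass Hamiltonian on a periodic lattice, and a stub policy in place of the trained PPO agent; all names and the observation/action choices are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    L = 8
    Jh = rng.normal(size=(L, L))   # horizontal couplings of the spin glass
    Jv = rng.normal(size=(L, L))   # vertical couplings

    def energy(s):
        return -(Jh * s * np.roll(s, 1, axis=1) + Jv * s * np.roll(s, 1, axis=0)).sum()

    def sweep(s, T):
        """One Metropolis sweep at temperature T (full energy recompute for simplicity)."""
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            trial = s.copy()
            trial[i, j] *= -1
            dE = energy(trial) - energy(s)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s = trial
        return s

    def stub_policy(observation):
        """Placeholder for the PPO agent: a fixed multiplicative cooling action."""
        return 0.95   # a trained agent would adapt this to the observed state

    # One annealing episode: the policy sets the temperature, Metropolis sweeps evolve the spins.
    s = rng.choice([-1, 1], size=(L, L))
    T = 3.0
    for step in range(50):
        T = max(T * stub_policy(energy(s)), 1e-3)
        s = sweep(s, T)
    final_energy = energy(s)   # the RL reward would be based on reaching the ground state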